Results 1 - 20 of 9,873
1.
Sci Rep ; 14(1): 8071, 2024 04 05.
Article in English | MEDLINE | ID: mdl-38580700

ABSTRACT

Over recent years, researchers and practitioners have seen massive and continuous improvements in the computational resources available to them. This has made the use of resource-hungry machine learning (ML) algorithms feasible and practical. Moreover, several advanced techniques are being used to boost the performance of such algorithms even further, including transfer learning, data augmentation, and feature concatenation. The usefulness of these techniques typically depends on the size and nature of the dataset at hand. For fine-grained medical image sets, which contain subcategories within the main categories, the combination of techniques that works best needs to be identified. In this work, we apply these advanced techniques to find the best combinations for building a state-of-the-art computer-aided diagnosis system for lumbar disc herniation. We evaluated the system extensively; it achieves an accuracy of 98% when compared against human diagnosis.
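
As a rough illustration of the transfer-learning and feature-concatenation techniques named above (the backbones, head size, and two-class output are illustrative assumptions, not the authors' architecture), a minimal PyTorch sketch:

```python
import torch
import torch.nn as nn
from torchvision import models

class ConcatFeatureClassifier(nn.Module):
    """Two frozen ImageNet backbones, pooled features concatenated, small trainable head."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
        res = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        # Transfer learning: reuse the pretrained convolutional stacks as fixed extractors.
        self.vgg_features = nn.Sequential(vgg.features, nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.res_features = nn.Sequential(*list(res.children())[:-1], nn.Flatten())
        for p in self.parameters():
            p.requires_grad = False
        # Feature concatenation: 512 (VGG16) + 2048 (ResNet-50) dimensions.
        self.head = nn.Linear(512 + 2048, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.vgg_features(x), self.res_features(x)], dim=1)
        return self.head(feats)

# usage: logits = ConcatFeatureClassifier()(torch.randn(1, 3, 224, 224))
```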


Subjects
Intervertebral Disc Displacement; Humans; Intervertebral Disc Displacement/diagnostic imaging; Diagnosis, Computer-Assisted/methods; Algorithms; Machine Learning; Computers
2.
Med Image Anal ; 94: 103157, 2024 May.
Article in English | MEDLINE | ID: mdl-38574544

ABSTRACT

Computer-aided detection and diagnosis systems (CADe/CADx) in endoscopy are commonly trained on high-quality imagery, which is not representative of the heterogeneous input typically encountered in clinical practice. In endoscopy, image quality depends heavily on both the skills and experience of the endoscopist and the specifications of the system used for screening. Factors such as poor illumination, motion blur, and specific post-processing settings can significantly alter the quality and general appearance of these images. This so-called domain gap between the data used to develop a system and the data it encounters after deployment, and its impact on the performance of deep neural network (DNN)-based endoscopic CAD systems, remain largely unexplored. As many such systems, e.g. for polyp detection, are already being rolled out in clinical practice, this poses severe patient risks, particularly in community hospitals, where both the imaging equipment and the operators' experience are subject to considerable variation. Therefore, this study aims to evaluate the impact of this domain gap on the clinical performance of CADe/CADx for various endoscopic applications. For this, we leverage two publicly available datasets (KVASIR-SEG and GIANA) and two in-house datasets. We investigate the performance of commonly used DNN architectures under synthetic, clinically calibrated image degradations and on a prospectively collected dataset of 342 endoscopic images of lower subjective quality. Additionally, we assess the influence of DNN architecture and complexity, data augmentation, and pretraining techniques on robustness. The results reveal a considerable decline in performance of 11.6% (±1.5), compared with the reference, within the clinically calibrated boundaries of image degradation. Nevertheless, employing more advanced DNN architectures and self-supervised in-domain pre-training effectively mitigates this drop to 7.7% (±2.03). Additionally, these enhancements yield the highest performance on the manually collected test set containing images of lower subjective quality. By comprehensively assessing the robustness of popular DNN architectures and training strategies across multiple datasets, this study provides valuable insights into their performance and limitations for endoscopic applications. The findings highlight the importance of including robustness evaluation when developing DNNs for endoscopy and propose strategies to mitigate the performance loss.
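
A minimal sketch of how synthetic image degradations of the kind described (blur, under-exposure, sensor noise) can be applied to probe a model's robustness; the degradation models and their clinical calibration in the study itself are more elaborate:

```python
import numpy as np
import cv2

def degrade(img: np.ndarray, blur_ksize: int = 9, gain: float = 0.6,
            noise_sigma: float = 10.0) -> np.ndarray:
    """Apply a simple blur + darkening + Gaussian-noise degradation to an 8-bit image."""
    out = cv2.GaussianBlur(img, (blur_ksize, blur_ksize), 0)      # motion/defocus proxy
    out = np.clip(out.astype(np.float32) * gain, 0, 255)          # under-exposure
    out = out + np.random.normal(0.0, noise_sigma, out.shape)     # sensor noise
    return np.clip(out, 0, 255).astype(np.uint8)

# usage (hypothetical model object): compare model scores on img vs. degrade(img)
# to estimate the performance drop inside the degradation boundaries.
```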


Subjects
Diagnosis, Computer-Assisted; Neural Networks, Computer; Humans; Diagnosis, Computer-Assisted/methods; Endoscopy, Gastrointestinal; Image Processing, Computer-Assisted/methods
3.
Respir Res ; 25(1): 177, 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38658980

ABSTRACT

BACKGROUND: Computer Aided Lung Sound Analysis (CALSA) aims to overcome limitations associated with standard lung auscultation by removing the subjective component and allowing quantification of sound characteristics. In this proof-of-concept study, a novel automated approach was evaluated on real patient data by comparing lung sound characteristics to structural and functional imaging biomarkers. METHODS: Patients with cystic fibrosis (CF) aged >5 years were recruited in a prospective cross-sectional study. CT scans were analyzed by the CF-CT scoring method and Functional Respiratory Imaging (FRI). A digital stethoscope was used to record lung sounds at six chest locations. The following sound characteristics were determined: expiration-to-inspiration (E/I) signal power ratios within different frequency ranges, the number of crackles per respiratory phase, and wheeze parameters. Linear mixed-effects models were computed to relate CALSA parameters to imaging biomarkers at the lobar level. RESULTS: In total, 222 recordings from 25 CF patients were included. Significant associations were found between E/I ratios and structural abnormalities, of which the ratio between 200 and 400 Hz appeared to be the most clinically relevant because of its relation with bronchiectasis, mucus plugging, bronchial wall thickening, and air trapping on CT. The number of crackles was also associated with multiple structural abnormalities as well as with regional airway resistance determined by FRI. Wheeze parameters were not considered in the statistical analysis, since wheezing was detected in only one recording. CONCLUSIONS: The present study is the first to investigate associations between auscultatory findings and imaging biomarkers, which are considered the gold standard for evaluating the respiratory system. Despite the exploratory nature of this study, the results showed various meaningful associations that highlight the potential value of automated CALSA as a novel non-invasive outcome measure in future research and clinical practice.
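
For illustration only, a small sketch of one of the reported sound characteristics, the expiration-to-inspiration (E/I) signal-power ratio in the 200-400 Hz band; the helper functions and sampling rate are assumptions, not the study's implementation:

```python
import numpy as np
from scipy.signal import welch

def band_power(x: np.ndarray, fs: float, f_lo: float, f_hi: float) -> float:
    """Integrate the Welch power spectral density between f_lo and f_hi (Hz)."""
    f, psd = welch(x, fs=fs, nperseg=1024)
    mask = (f >= f_lo) & (f <= f_hi)
    return float(np.trapz(psd[mask], f[mask]))

def ei_ratio(expiration: np.ndarray, inspiration: np.ndarray, fs: float = 4000.0) -> float:
    """E/I power ratio in the 200-400 Hz band for pre-segmented respiratory phases."""
    return band_power(expiration, fs, 200, 400) / band_power(inspiration, fs, 200, 400)
```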


Subjects
Biomarkers; Cystic Fibrosis; Respiratory Sounds; Humans; Cross-Sectional Studies; Male; Female; Prospective Studies; Adult; Cystic Fibrosis/physiopathology; Cystic Fibrosis/diagnostic imaging; Young Adult; Adolescent; Auscultation/methods; Tomography, X-Ray Computed/methods; Lung/diagnostic imaging; Lung/physiopathology; Child; Proof of Concept Study; Diagnosis, Computer-Assisted/methods; Middle Aged
4.
Int Ophthalmol ; 44(1): 191, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38653842

ABSTRACT

Optical Coherence Tomography (OCT) is widely recognized as the leading modality for assessing ocular retinal diseases, playing a crucial role in diagnosing retinopathy while remaining non-invasive. The increasing volume of OCT images underscores the growing importance of automating image analysis. Age-related Macular Degeneration (AMD) and Diabetic Macular Edema (DME) are among the most common causes of visual impairment. Early detection and timely intervention for diabetes-related conditions are essential for preventing ocular complications and reducing the risk of blindness. This study introduces a novel Computer-Aided Diagnosis (CAD) system based on a Convolutional Neural Network (CNN) model, aiming to identify and classify OCT retinal images into AMD, DME, and Normal classes. Leveraging the strengths of CNNs in feature learning and classification, several CNN models, including pre-trained VGG16, VGG19, and Inception_V3, a custom model trained from scratch, and BCNN(VGG16)², BCNN(VGG19)², and BCNN(Inception_V3)², are developed for the classification of AMD, DME, and Normal OCT images. The proposed approach has been evaluated on two datasets: the public DUKE dataset and a private Tunisian dataset. The combination of the Inception_V3 model and the features extracted from the proposed custom CNN achieved the highest accuracy of 99.53% on the DUKE dataset. The results obtained on the public DUKE and Tunisian datasets demonstrate that the proposed approach is an effective tool for efficient and automatic retinal OCT image classification.


Subjects
Deep Learning; Macular Degeneration; Macular Edema; Tomography, Optical Coherence; Humans; Tomography, Optical Coherence/methods; Macular Degeneration/diagnosis; Macular Edema/diagnosis; Macular Edema/diagnostic imaging; Macular Edema/etiology; Diabetic Retinopathy/diagnosis; Diabetic Retinopathy/diagnostic imaging; Neural Networks, Computer; Retina/diagnostic imaging; Retina/pathology; Diagnosis, Computer-Assisted/methods; Aged; Female; Male
5.
Comput Biol Med ; 172: 108267, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38479197

ABSTRACT

Early detection of colon adenomatous polyps is pivotal in reducing colon cancer risk. In this context, accurately distinguishing between adenomatous polyp subtypes, especially tubular and tubulovillous, from hyperplastic variants is crucial. This study introduces a cutting-edge computer-aided diagnosis system optimized for this task. Our system employs advanced Supervised Contrastive learning to ensure precise classification of colon histopathology images. Significantly, we have integrated the Big Transfer model, which has gained prominence for its exemplary adaptability to visual tasks in medical imaging. Our novel approach discerns between in-class and out-of-class images, thereby elevating its discriminatory power for polyp subtypes. We validated our system using two datasets: a specially curated one and the publicly accessible UniToPatho dataset. The results reveal that our model markedly surpasses traditional deep convolutional neural networks, registering classification accuracies of 87.1% and 70.3% for the custom and UniToPatho datasets, respectively. Such results emphasize the transformative potential of our model in polyp classification endeavors.
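
A hedged sketch of a supervised contrastive loss of the kind referred to above; batching, temperature, and normalization choices are illustrative, not the paper's exact setup:

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """embeddings: (N, D) features; labels: (N,) class ids. Pulls same-class pairs together."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.T / temperature                       # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))   # exclude self-comparisons
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)     # avoid division by zero
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_counts
    return loss.mean()
```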


Subjects
Adenomatous Polyps; Colonic Polyps; Humans; Colonic Polyps/diagnostic imaging; Neural Networks, Computer; Diagnosis, Computer-Assisted/methods; Diagnostic Imaging
6.
Comput Methods Programs Biomed ; 247: 108101, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38432087

ABSTRACT

BACKGROUND AND OBJECTIVE: Deep learning approaches are being increasingly applied for medical computer-aided diagnosis (CAD). However, these methods generally target only specific image-processing tasks, such as lesion segmentation or benign state prediction. For the breast cancer screening task, single feature extraction models are generally used, which directly extract only those potential features from the input mammogram that are relevant to the target task. This can lead to the neglect of other important morphological features of the lesion as well as other auxiliary information from the internal breast tissue. To obtain more comprehensive and objective diagnostic results, in this study, we developed a multi-task fusion model that combines multiple specific tasks for CAD of mammograms. METHODS: We first trained a set of separate, task-specific models, including a density classification model, a mass segmentation model, and a lesion benignity-malignancy classification model, and then developed a multi-task fusion model that incorporates all of the mammographic features from these different tasks to yield comprehensive and refined prediction results for breast cancer diagnosis. RESULTS: The experimental results showed that our proposed multi-task fusion model outperformed other related state-of-the-art models in both breast cancer screening tasks in the publicly available datasets CBIS-DDSM and INbreast, achieving a competitive screening performance with area-under-the-curve scores of 0.92 and 0.95, respectively. CONCLUSIONS: Our model not only allows an overall assessment of lesion types in mammography but also provides intermediate results related to radiological features and potential cancer risk factors, indicating its potential to offer comprehensive workflow support to radiologists.
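
To make the fusion idea concrete, a hypothetical sketch in which the outputs of the three task-specific models are concatenated and passed to a small fusion head; the module names and dimensions are assumptions, not the paper's design:

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Fuse density logits, a pooled segmentation map, and lesion logits into a final diagnosis."""
    def __init__(self, density_dim: int = 4, lesion_dim: int = 2, num_classes: int = 2):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(4)   # summarize the mass-segmentation map to 4x4
        self.mlp = nn.Sequential(
            nn.Linear(density_dim + 16 + lesion_dim, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, density_logits, seg_mask, lesion_logits):
        seg_feat = self.pool(seg_mask).flatten(1)                 # (N, 16) from (N, 1, H, W)
        fused = torch.cat([density_logits, seg_feat, lesion_logits], dim=1)
        return self.mlp(fused)
```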


Subjects
Breast Neoplasms; Humans; Female; Breast Neoplasms/diagnosis; Early Detection of Cancer; Mammography/methods; Neural Networks, Computer; Diagnosis, Computer-Assisted/methods; Breast/diagnostic imaging; Breast/pathology
7.
PLoS One ; 19(3): e0298527, 2024.
Article in English | MEDLINE | ID: mdl-38466701

ABSTRACT

Lung cancer is one of the leading causes of cancer-related deaths worldwide. To reduce the mortality rate, early detection and proper treatment must be ensured. Computer-aided diagnosis methods analyze different modalities of medical images to increase diagnostic precision. In this paper, we propose an ensemble model, called the Mitscherlich function-based Ensemble Network (MENet), which combines the prediction probabilities obtained from three deep learning models, namely Xception, InceptionResNetV2, and MobileNetV2, to improve the accuracy of lung cancer prediction. The ensemble approach is based on the Mitscherlich function, which produces a fuzzy rank to combine the outputs of the base classifiers. The proposed method is trained and tested on two publicly available lung cancer datasets, the Iraq-Oncology Teaching Hospital/National Center for Cancer Diseases (IQ-OTH/NCCD) dataset and LIDC-IDRI, both of which are computed tomography (CT) scan datasets. The results on standard metrics show that the proposed method performs better than state-of-the-art methods. The code for the proposed work is available at https://github.com/SuryaMajumder/MENet.
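
A heavily hedged sketch of a fuzzy-rank ensemble in this spirit, using a Mitscherlich-type function r(p) = a(1 - b e^(-cp)) to map each model's class probability to a rank score; the paper's exact formulation and constants may differ:

```python
import numpy as np

def fuzzy_rank(probs: np.ndarray, a: float = 1.0, b: float = 1.0, c: float = 1.0) -> np.ndarray:
    """Mitscherlich-type mapping of probabilities (models x classes); higher prob -> higher value."""
    return a * (1.0 - b * np.exp(-c * probs))

def ensemble_predict(prob_list) -> int:
    """Combine per-model softmax outputs for one scan and return the predicted class index."""
    probs = np.stack(prob_list)            # (n_models, n_classes)
    ranks = 1.0 - fuzzy_rank(probs)        # low value where a model is confident
    fused = ranks.sum(axis=0)              # aggregate support across models
    return int(np.argmin(fused))           # class with the smallest fused rank wins

# usage: ensemble_predict([p_xception, p_inceptionresnetv2, p_mobilenetv2])
```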


Subjects
Lung Neoplasms; Humans; Lung Neoplasms/diagnostic imaging; Lung/diagnostic imaging; Tomography, X-Ray Computed/methods; Diagnosis, Computer-Assisted/methods; Iraq
8.
Artif Intell Med ; 148: 102753, 2024 02.
Article in English | MEDLINE | ID: mdl-38325931

ABSTRACT

BACKGROUND: In recent years, Computer Aided Diagnosis (CAD) has become an important research area that has attracted many researchers. In medical diagnostic systems, several attempts have been made to build and enhance CAD applications to avoid errors that can cause dangerously misleading medical treatments. The most promising opportunity for improving the performance of CAD systems lies in integrating Artificial Intelligence (AI) into medicine. This allows the effective automation of traditional manual workflows, which are slow, inaccurate, and affected by human error. AIMS: This paper aims to provide a complete Computer Aided Disease Diagnosis (CAD2) strategy based on Machine Learning (ML) techniques that can help clinicians make better medical decisions. METHODS: The proposed CAD2 consists of three main sequential phases, namely (i) an Outlier Rejection Phase (ORP), (ii) a Feature Selection Phase (FSP), and (iii) a Classification Phase (CP). The ORP rejects outliers using a new Outlier Rejection Technique (ORT) that contains two sequential stages called Fast Outlier Rejection (FOR) and Accurate Outlier Rejection (AOR). The most informative features are selected in the FSP using a Hybrid Selection Technique (HST). HST includes two main stages: the Quick Selection Stage (QS2), which uses the Fisher score as a filter method, and the Precise Selection Stage (PS2), which uses a Hybrid Bio-inspired Optimization (HBO) technique as a wrapper method. Finally, the actual diagnosis takes place in the CP, which relies on an Ensemble Classification Technique (ECT). RESULTS: The proposed CAD2 was tested experimentally against recent disease diagnostic strategies using two different datasets, the first of which contains several diseases, while the second includes data for COVID-19 patients only. Experimental results demonstrate the high efficiency of the proposed CAD2 in terms of accuracy, error, precision, and recall compared with the other competitors. Additionally, the CAD2 strategy achieves the best Wilcoxon signed-rank and Friedman test results against the other strategies on both datasets. CONCLUSION: It is concluded that the CAD2 strategy based on ORP, FSP, and CP gives an accurate diagnosis compared with the other strategies, as it achieves the highest accuracy and the lowest error and implementation time.
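
For illustration, a sketch of the Fisher-score filter step (QS2) only; the outlier-rejection, wrapper, and ensemble stages are omitted and this is not the authors' implementation:

```python
import numpy as np

def fisher_scores(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Fisher score per feature: between-class scatter divided by within-class scatter."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

# keep the k highest-scoring features:
# top_k = np.argsort(fisher_scores(X, y))[::-1][:k]
```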


Subjects
Artificial Intelligence; Diagnosis, Computer-Assisted; Humans; Diagnosis, Computer-Assisted/methods; Machine Learning
9.
Tomography ; 10(2): 215-230, 2024 Feb 05.
Article in English | MEDLINE | ID: mdl-38393285

ABSTRACT

Diagnosing and screening for diabetic retinopathy is a well-known problem in the biomedical field. The use of medical imagery of a patient's eye to identify damage caused to blood vessels is a component of computer-aided diagnosis that has advanced significantly over the past few years as a result of the development and effectiveness of deep learning. Issues with unbalanced datasets, incorrect annotations, a lack of sample images, and improper performance evaluation measures have negatively impacted the performance of deep learning models. Using three benchmark diabetic retinopathy datasets, we conducted a detailed comparison of various state-of-the-art approaches to addressing the effect of class imbalance, obtaining precision scores of 93%, 89%, 81%, 76%, and 96% for the normal, mild, moderate, severe, and DR phases, respectively. The analyses of the hybrid modeling, including the CNN analysis and the SHAP model derivation results, are compared at the end of the paper, and ideal hybrid modeling strategies for deep learning classification models for automated DR detection are identified.
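
One common remedy among the class-imbalance strategies such studies compare is inverse-frequency class weighting in the loss; a minimal sketch with toy grade counts, not the benchmark datasets' actual distributions:

```python
import numpy as np
import torch
import torch.nn as nn

# Hypothetical per-grade counts for a heavily imbalanced DR dataset.
labels = np.array([0] * 700 + [1] * 120 + [2] * 90 + [3] * 50 + [4] * 40)
counts = np.bincount(labels)
weights = counts.sum() / (len(counts) * counts)        # inverse-frequency weights

# Minority grades now contribute proportionally more to the training loss.
criterion = nn.CrossEntropyLoss(weight=torch.tensor(weights, dtype=torch.float32))
```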


Subjects
Deep Learning; Diabetes Mellitus; Diabetic Retinopathy; Humans; Diabetic Retinopathy/diagnostic imaging; Diagnosis, Computer-Assisted/methods; Benchmarking; Computers
10.
Int Ophthalmol ; 44(1): 110, 2024 Feb 23.
Article in English | MEDLINE | ID: mdl-38396074

ABSTRACT

PURPOSE: Early detection of retinal disorders using optical coherence tomography (OCT) images can prevent vision loss. Since manual screening can be time-consuming, tedious, and fallible, we present reliable computer-aided diagnosis (CAD) software based on deep learning. We also made efforts to increase the interpretability of the deep learning methods, to overcome their vague, black-box nature, and to understand their behavior in the diagnosis. METHODS: We propose a novel method to improve the interpretability of the deep neural network by embedding the rich semantic information of abnormal areas, based on ophthalmologists' interpretations and medical descriptions, in the OCT images. Finally, we trained the classification network on a small subset of the publicly available University of California San Diego (UCSD) dataset, comprising 29,800 OCT images overall. RESULTS: The experimental results on the 1,000 test OCT images show that the proposed method achieves an overall precision, accuracy, sensitivity, and F1-score of 97.6%, 97.6%, 97.6%, and 97.59%, respectively. The heat map images also show a clear region of interest, indicating that the interpretability of the proposed method is increased dramatically. CONCLUSION: The proposed software can help ophthalmologists by providing a second opinion for decision making and preliminary automated diagnoses of retinal diseases, and it can even be used as a screening tool in eye clinics. Moreover, the improved interpretability of the proposed method increases model generalization, so it is expected to work properly on a wide range of other OCT datasets.
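
As a generic illustration of heat-map interpretability (a standard Grad-CAM-style computation, not the paper's semantic-embedding method), a short PyTorch sketch:

```python
import torch.nn.functional as F

def grad_cam(model, layer, image, target_class):
    """image: (1, C, H, W). Returns a normalized (H', W') heat map from `layer`."""
    acts, grads = [], []
    def fwd_hook(module, inp, out):
        acts.append(out.detach())
    def bwd_hook(module, grad_in, grad_out):
        grads.append(grad_out[0].detach())
    h1 = layer.register_forward_hook(fwd_hook)
    h2 = layer.register_full_backward_hook(bwd_hook)
    logits = model(image)
    model.zero_grad()
    logits[0, target_class].backward()
    h1.remove(); h2.remove()
    weights = grads[0].mean(dim=(2, 3), keepdim=True)          # pooled gradients per channel
    cam = F.relu((weights * acts[0]).sum(dim=1)).squeeze(0)    # weighted activation map
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```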


Subjects
Deep Learning; Retinal Diseases; Humans; Tomography, Optical Coherence/methods; Retinal Diseases/diagnosis; Diagnosis, Computer-Assisted/methods; Computers
11.
Microsc Res Tech ; 87(6): 1271-1285, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38353334

ABSTRACT

Skin is the exposed part of the human body that constantly protects it from UV rays, heat, light, dust, and other hazardous radiation. One of the most dangerous illnesses that affect people is skin cancer. A type of skin cancer called melanoma starts in the melanocytes, which regulate the colour of human skin. Reducing the fatality rate from skin cancer requires early detection and diagnosis of conditions like melanoma. In this article, melanoma classification from dermoscopic images using a self-attention-based cycle-consistent generative adversarial network optimized with the Archerfish Hunting Optimization Algorithm (SACCGAN-AHOA-MC-DI) is proposed. First, the input dermoscopic images are taken from the ISIC 2019 dataset. The images are then pre-processed using adjusted quick shift phase-preserving dynamic range compression (AQSP-DRC) to remove noise and improve image quality. The pre-processed images are fed to piecewise fuzzy C-means clustering (PF-CMC) for ROI segmentation. The segmented ROI is supplied to the Hexadecimal Local Adaptive Binary Pattern (HLABP) to extract radiomic features, namely grayscale statistical features (standard deviation, mean, kurtosis, and skewness) together with Haralick texture features (contrast, energy, entropy, homogeneity, and inverse difference moments). The extracted features are fed to the self-attention-based cycle-consistent generative adversarial network (SACCGAN), which classifies the skin lesions as melanocytic nevus, basal cell carcinoma, actinic keratosis, benign keratosis, dermatofibroma, vascular lesion, squamous cell carcinoma, or melanoma. By itself, SACCGAN does not include an optimization scheme for determining the ideal parameters that ensure accurate classification of skin cancer. Hence, the Archerfish Hunting Optimization Algorithm (AHOA) is used to tune the SACCGAN classifier so that it categorizes skin cancer accurately. The proposed method attains 23.01%, 14.96%, and 45.31% higher accuracy and 32.16%, 11.32%, and 24.56% lower computational time compared with existing methods, namely a melanoma prediction method for unbalanced data utilizing a SqueezeNet optimized through bald eagle search optimization (CNN-BES-MC-DI), a hyper-parameter-optimized CNN based on the grey wolf optimization algorithm (CNN-GWOA-MC-DI), and DEANN-based skin cancer detection relying on fuzzy C-means clustering (DEANN-MC-DI). RESEARCH HIGHLIGHTS: A self-attention-based cycle-consistent GAN classifier (SACCGAN-AHOA-MC-DI) for dermoscopic images is proposed and implemented in Python. Adjusted quick shift phase-preserving dynamic range compression (AQSP-DRC) is used to remove noise and improve the quality of the dermoscopic images.
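
A sketch of the kind of feature-extraction step described (grayscale statistics plus a few GLCM texture measures); the HLABP descriptor itself is specific to the paper and is not reproduced here:

```python
import numpy as np
from scipy.stats import skew, kurtosis
from skimage.feature import graycomatrix, graycoprops

def radiomic_features(roi: np.ndarray) -> np.ndarray:
    """roi: 2-D uint8 grayscale region of interest from a segmented lesion."""
    # Grayscale statistical features.
    stats = [roi.mean(), roi.std(), skew(roi.ravel()), kurtosis(roi.ravel())]
    # A handful of Haralick-style GLCM texture features.
    glcm = graycomatrix(roi, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    texture = [float(graycoprops(glcm, p)[0, 0])
               for p in ("contrast", "energy", "homogeneity", "correlation")]
    return np.array(stats + texture)
```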


Subjects
Keratosis, Actinic; Melanoma; Skin Neoplasms; Humans; Melanoma/diagnosis; Skin Neoplasms/diagnosis; Melanocytes/pathology; Algorithms; Diagnosis, Computer-Assisted/methods
12.
Med Eng Phys ; 124: 104101, 2024 02.
Article in English | MEDLINE | ID: mdl-38418029

ABSTRACT

With the advancement of deep learning technology, computer-aided diagnosis (CAD) is playing an increasing role in the field of medical diagnosis. In particular, the emergence of Transformer-based models has led to wider application of computer vision technology in medical image processing. In the diagnosis of thyroid disease, the classification of benign and malignant thyroid nodules based on TI-RADS is greatly influenced by the subjective judgment of ultrasonographers and also imposes an extremely heavy workload on them. To address this, we propose the Swin-Residual Transformer (SRT), which incorporates residual blocks and a triplet loss into the Swin Transformer (SwinT). It improves sensitivity to the global and localized features of thyroid nodules and better distinguishes small feature differences. In our exploratory experiments, the SRT model achieves an accuracy of 0.8832 with an AUC of 0.8660, outperforming state-of-the-art convolutional neural network (CNN) and Transformer models. Ablation experiments also demonstrate the improved performance on the thyroid nodule classification task after introducing the residual blocks and the triplet loss. These results validate the potential of the proposed SRT model to improve the diagnosis of thyroid nodule ultrasound images. It also provides a feasible way to avoid excessive puncture sampling of thyroid nodules in future clinical diagnosis.
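
A minimal sketch of the triplet-loss idea mentioned above (anchor and positive share a class, the negative does not); the embedding network and mining strategy are placeholders, not the SRT implementation:

```python
import torch.nn as nn

triplet = nn.TripletMarginLoss(margin=1.0)

def triplet_step(embed, anchor_img, positive_img, negative_img):
    """embed: any network mapping image batches to feature vectors."""
    a, p, n = embed(anchor_img), embed(positive_img), embed(negative_img)
    # Pulls same-class nodule embeddings together and pushes different classes apart.
    return triplet(a, p, n)
```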


Subjects
Delayed Emergence from Anesthesia; Thyroid Nodule; Humans; Thyroid Nodule/diagnostic imaging; Thyroid Nodule/pathology; Ultrasonography; Diagnosis, Computer-Assisted/methods
13.
Radiol Artif Intell ; 6(2): e230327, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38197795

ABSTRACT

Tuberculosis, which primarily affects developing countries, remains a significant global health concern. Since the 2010s, the role of chest radiography has expanded in tuberculosis triage and screening beyond its traditional complementary role in the diagnosis of tuberculosis. Computer-aided diagnosis (CAD) systems for tuberculosis detection on chest radiographs have recently made substantial progress in diagnostic performance, thanks to deep learning technologies. The current performance of CAD systems for tuberculosis has approximated that of human experts, presenting a potential solution to the shortage of human readers to interpret chest radiographs in low- or middle-income, high-tuberculosis-burden countries. This article provides a critical appraisal of developmental process reporting in extant CAD software for tuberculosis, based on the Checklist for Artificial Intelligence in Medical Imaging. It also explores several considerations to scale up CAD solutions, encompassing manufacturer-independent CAD validation, economic and political aspects, and ethical concerns, as well as the potential for broadening radiography-based diagnosis to other nontuberculosis diseases. Collectively, CAD for tuberculosis will emerge as a representative deep learning application, catalyzing advances in global health and health equity. Keywords: Computer-aided Diagnosis (CAD), Conventional Radiography, Thorax, Lung, Machine Learning Supplemental material is available for this article. © RSNA, 2024.


Subjects
Artificial Intelligence; Tuberculosis; Humans; Global Health; Software; Diagnosis, Computer-Assisted/methods
14.
Ultrasound Med Biol ; 50(4): 509-519, 2024 04.
Article in English | MEDLINE | ID: mdl-38267314

ABSTRACT

OBJECTIVE: The main objective of this study was to build a rich, high-quality thyroid ultrasound image database (TUD) for computer-aided diagnosis (CAD) systems, to support accurate diagnosis and prognostic modeling of thyroid disorders. Because most raw thyroid ultrasound images contain artificial markers, which seriously affect the robustness of CAD systems because of their strong prior location information, we propose a marker mask inpainting (MMI) method to erase artificial markers and improve image quality. METHODS: First, a set of thyroid ultrasound images was collected from the General Hospital of the Northern Theater Command. Then, two modules were designed in MMI, namely the marker detection (MD) module and the marker erasure (ME) module. The MD module detects all markers in the image and stores them in a binary mask. Based on the binary mask, the ME module erases the markers and generates an unmarked image. Finally, a new TUD based on the marked and unmarked images was built. The TUD is carefully annotated and statistically analyzed by professional physicians to ensure accuracy and consistency. Moreover, several normal thyroid gland images and some ancillary information on benign and malignant nodules are provided. RESULTS: Several typical segmentation models were evaluated on the TUD. The experimental results revealed that our TUD can facilitate the development of more accurate CAD systems for the analysis of thyroid nodule-related lesions in ultrasound images. The effectiveness of our MMI method was confirmed in quantitative experiments. CONCLUSION: The rich, high-quality TUD resource promotes the development of more effective diagnostic and treatment methods for thyroid diseases. Furthermore, MMI is proposed for erasing artificial markers and generating unmarked images to improve the quality of thyroid ultrasound images. Our TUD database is available at https://github.com/NEU-LX/TUD-Datebase.
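
As a hedged stand-in for the marker-erasure step, a classical mask-based inpainting call; the MMI method uses a learned detector and eraser, which this does not reproduce:

```python
import numpy as np
import cv2

def erase_markers(image: np.ndarray, marker_mask: np.ndarray) -> np.ndarray:
    """image: 8-bit ultrasound frame; marker_mask: binary image with 255 at marker pixels.
    Fills marker pixels from the surrounding tissue texture."""
    return cv2.inpaint(image, marker_mask.astype(np.uint8), 3, cv2.INPAINT_TELEA)
```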


Subjects
Thyroid Nodule; Humans; Thyroid Nodule/pathology; Diagnosis, Computer-Assisted/methods; Ultrasonography/methods; Research
15.
J Xray Sci Technol ; 32(1): 53-68, 2024.
Article in English | MEDLINE | ID: mdl-38189730

ABSTRACT

BACKGROUND: With the rapid growth of Deep Neural Networks (DNNs) and Computer-Aided Diagnosis (CAD), increasingly significant work has been devoted to cancer-related diseases. Skin cancer is a highly hazardous type of cancer that is hard to diagnose in its early stages. OBJECTIVE: Diagnosing skin cancer is a challenge for dermatologists, as an abnormal lesion looks like an ordinary nevus in the initial stages. Therefore, early identification of lesions (the origin of skin cancer) is essential and helpful for treating skin cancer patients effectively. The substantial development of automated skin cancer diagnosis systems significantly supports dermatologists. METHODS: This paper performs skin cancer classification using various deep-learning frameworks after resolving the class-imbalance problem in the ISIC-2019 dataset. A fine-tuned ResNet-50 model is used to evaluate performance on the original data, on augmented data, and after adding the focal loss. Focal loss is an effective technique for addressing overfitting, as it assigns higher weights to hard, misclassified images. RESULTS: Finally, augmented data with focal loss yields good classification performance, with 98.85% accuracy, 95.52% precision, and 95.93% recall. The Matthews correlation coefficient (MCC) is well suited to evaluating the quality of multi-class classification, and it likewise indicates outstanding performance when augmented data and focal loss are used.
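
A minimal multi-class focal-loss sketch; gamma and alpha are illustrative defaults, not the paper's tuned values:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor,
               gamma: float = 2.0, alpha: float = 0.25) -> torch.Tensor:
    """Down-weights easy examples so hard, misclassified images dominate the gradient."""
    log_p = F.log_softmax(logits, dim=1)
    ce = F.nll_loss(log_p, targets, reduction="none")   # per-sample cross entropy
    p_t = torch.exp(-ce)                                 # probability of the true class
    return (alpha * (1 - p_t) ** gamma * ce).mean()
```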


Subjects
Deep Learning; Nevus; Skin Neoplasms; Humans; Skin Neoplasms/diagnostic imaging; Skin Neoplasms/pathology; Nevus/pathology; Neural Networks, Computer; Diagnosis, Computer-Assisted/methods
16.
J Am Coll Surg ; 238(5): 856-860, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38258847

ABSTRACT

BACKGROUND: We previously reported the successful development of a computer-aided diagnosis (CAD) system for preventing retained surgical sponges with deep learning using training data, including composite and simulated radiographs. In this study, we evaluated the efficacy of the CAD system in a clinical setting. STUDY DESIGN: A total of 1,053 postoperative radiographs obtained from patients 20 years of age or older who underwent surgery were evaluated. We implemented a foreign object detection application software on the portable radiographic device used in the operating room to detect retained surgical sponges. The results of the CAD system diagnosis were prospectively collected. RESULTS: Among the 1,053 images, the CAD system detected possible retained surgical items in 150 images. Specificity was 85.8%, which is similar to the data obtained during the development of the software. CONCLUSIONS: The validation of a CAD system using deep learning in a clinical setting showed similar efficacy as during the development of the system. These results suggest that the CAD system can contribute to the establishment of a more effective protocol than the current standard practice for preventing the retention of surgical items.


Subjects
Foreign Bodies; Software; Humans; Diagnosis, Computer-Assisted/methods; Radiography; Foreign Bodies/diagnostic imaging; Foreign Bodies/prevention & control; Foreign Bodies/surgery; Computers; Sensitivity and Specificity
17.
PLoS One ; 19(1): e0295951, 2024.
Article in English | MEDLINE | ID: mdl-38165976

ABSTRACT

The integration of artificial intelligence (AI) in diagnosing diabetic retinopathy, a major contributor to global vision impairment, is becoming increasingly pronounced. Notably, the detection of vision-threatening diabetic retinopathy (VTDR) has been significantly fortified through automated techniques. Traditionally, the conventional approach relied on manual analysis of retinal images, which is slow and error-prone. Addressing this, our study introduces a novel methodology that amplifies the robustness and precision of the detection process. This is complemented by the Hierarchical Block Attention (HBA) and HBA-U-Net architecture, which notably advance attention mechanisms in image segmentation. This model refines image processing without imposing excessive computational demands by homing in on individual pixel intricacies, spatial relationships, and channel-specific attention. Building upon this, our proposed method employs a multi-stage strategy encompassing data pre-processing, feature extraction via a hybrid CNN-SVD model, and classification employing a combination of Improved Support Vector Machine-Radial Basis Function (ISVM-RBF), decision tree (DT), and k-nearest neighbors (KNN) techniques. Rigorously tested on the IDRiD dataset, classified into five severity tiers, the hybrid model yields remarkable performance, achieving 99.18% accuracy, 98.15% sensitivity, and 100% specificity in VTDR detection, thus surpassing existing methods. These results underscore a more potent avenue for diagnosing and addressing this crucial ocular condition while highlighting AI's transformative potential in medical care, particularly in ophthalmology.
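
A sketch of the "CNN features, SVD reduction, RBF-SVM" stage; the feature extractor and hyperparameters are placeholders, not the tuned ISVM-RBF of the paper:

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def build_classifier(n_components: int = 64):
    """Reduce pre-extracted CNN features with SVD, then classify with an RBF-kernel SVM."""
    return make_pipeline(StandardScaler(),
                         TruncatedSVD(n_components=n_components),
                         SVC(kernel="rbf", C=1.0, gamma="scale"))

# usage: clf = build_classifier(); clf.fit(cnn_features_train, y_train)
```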


Subjects
Diabetes Mellitus; Diabetic Retinopathy; Humans; Artificial Intelligence; Diabetic Retinopathy/diagnostic imaging; Support Vector Machine; Image Interpretation, Computer-Assisted/methods; Diagnosis, Computer-Assisted/methods
18.
Radiol Phys Technol ; 17(1): 195-206, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38165579

ABSTRACT

Somatostatin receptor scintigraphy (SRS) is an essential examination for the diagnosis of neuroendocrine tumors (NETs). This study developed a method to individually optimize the display of whole-body SRS images using a deep convolutional neural network (DCNN) reconstructed by transfer learning of a DCNN constructed using Gallium-67 (67Ga) images. The initial DCNN was constructed using U-Net to optimize the display of 67Ga images (493 cases/986 images), and a DCNN with transferred weight coefficients was reconstructed for the optimization of whole-body SRS images (133 cases/266 images). A DCNN was constructed for each observer using reference display conditions estimated in advance. Furthermore, to eliminate information loss in the original image, a grayscale linear process is performed based on the DCNN output image to obtain the final linearly corrected DCNN (LcDCNN) image. To verify the usefulness of the proposed method, an observer study using a paired-comparison method was conducted on the original, reference, and LcDCNN images of 15 cases with 30 images. The paired-comparison method showed that in most cases (29/30), the LcDCNN images were significantly superior to the original images in terms of display conditions. When comparing the LcDCNN and reference images, 17 LcDCNN images and 13 reference images were judged superior in display condition, and in both groups 6 of these images showed statistically significant differences. The optimized SRS images obtained using the proposed method, while reflecting the observer's preference, were superior to the conventional manually adjusted images.
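
A hedged sketch of the idea behind the linear correction step: fit a simple linear (window/level-like) mapping of the original image that best matches the DCNN output, then display the linearly mapped original so no pixel information is lost; the details differ from the paper:

```python
import numpy as np

def linear_correction(original: np.ndarray, dcnn_output: np.ndarray) -> np.ndarray:
    """Least-squares fit of dcnn_output ~ a * original + b, applied to the original image."""
    x = original.astype(np.float64).ravel()
    y = dcnn_output.astype(np.float64).ravel()
    a, b = np.polyfit(x, y, deg=1)            # slope and intercept of the display mapping
    return a * original.astype(np.float64) + b
```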


Subjects
Neural Networks, Computer; Receptors, Somatostatin; Diagnosis, Computer-Assisted/methods; Tomography, X-Ray Computed; Radionuclide Imaging
19.
Comput Methods Programs Biomed ; 244: 107999, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38194766

ABSTRACT

BACKGROUND AND OBJECTIVE: Thyroid nodule segmentation is a crucial step in the diagnostic procedure of physicians and computer-aided diagnosis systems. However, prevailing studies often treat segmentation and diagnosis as independent tasks, overlooking the intrinsic relationship between these processes. The sequential execution of these independent tasks in computer-aided diagnosis systems may lead to the accumulation of errors. Therefore, it is worth combining them as a whole by exploring the relationship between thyroid nodule segmentation and diagnosis. According to the diagnostic procedure of the thyroid imaging reporting and data system (TI-RADS), the assessment of shape and margin characteristics is the prerequisite for radiologists to discriminate benign and malignant thyroid nodules. Inspired by TI-RADS, this study aims to integrate these tasks into a cohesive process, thereby enhancing the accuracy and interpretability of thyroid nodule analysis. METHODS: Specifically, this paper proposes a shape-margin knowledge augmented network (SkaNet) for simultaneous thyroid nodule segmentation and diagnosis. Because segmentation and diagnosis rely on similar visual features, SkaNet shares features in the feature extraction stage and then utilizes a dual-branch architecture to perform the thyroid nodule segmentation and diagnosis tasks. In the shared feature extraction, the combination of convolutional feature maps and self-attention maps allows exploitation of both local information and global patterns in thyroid nodule images. To enhance effective discriminative features, an exponential mixture module is introduced, combining convolutional feature maps and self-attention maps through exponential weighting. SkaNet is then jointly optimized by a knowledge-augmented multi-task loss function with a constraint penalty term. The constraint penalty term embeds shape and margin characteristics through numerical computations, establishing a vital relationship between thyroid nodule diagnosis results and segmentation masks. RESULTS: We evaluate the proposed approach on a public thyroid ultrasound dataset (DDTI) and a locally collected thyroid ultrasound dataset. The experimental results reveal the value of our contributions and demonstrate that our approach can yield significant improvements over state-of-the-art counterparts. CONCLUSIONS: SkaNet highlights the potential of combining thyroid nodule segmentation and diagnosis with knowledge-augmented learning in a unified framework that captures the key shape and margin characteristics for discriminating benign and malignant thyroid nodules. Our findings suggest promising directions for advancing computer-aided diagnosis jointly with segmentation.
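
A hedged sketch of what an "exponential mixture" of convolutional and self-attention feature maps could look like, with two branch weights normalized through exponentials; the actual SkaNet module may be defined differently:

```python
import torch
import torch.nn as nn

class ExponentialMixture(nn.Module):
    """Blend two feature maps of the same shape with exponentially normalized weights."""
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.zeros(2))   # one learnable logit per branch

    def forward(self, conv_feat: torch.Tensor, attn_feat: torch.Tensor) -> torch.Tensor:
        w = torch.exp(self.alpha)
        w = w / w.sum()                              # exponential (softmax-style) weighting
        return w[0] * conv_feat + w[1] * attn_feat
```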


Subjects
Thyroid Nodule; Humans; Thyroid Nodule/diagnostic imaging; Thyroid Nodule/pathology; Ultrasonography/methods; Diagnosis, Computer-Assisted/methods; Diagnosis, Differential